Interactive visualization for information analysis in medical diagnosis
This paper investigates to what extent the findings and solutions of information analysis in intelligence analysis can be applied and transferred to the medical diagnosis domain. Interactive visualization is proposed to address some of the problems faced by both domains. Design issues related to selected common problems are then discussed in detail. Finally, a visual sense-making system, INVISQUE, is used as an example to illustrate how interactive visualization can support information analysis and medical diagnosis.
Explainable AI: The new 42?
Explainable AI is not a new field. Since at least the early exploitation of C.S. Peirce's abductive reasoning in the expert systems of the 1980s, there have been reasoning architectures to support an explanation function for complex AI systems, including applications in medical diagnosis, complex multi-component design, and reasoning about the real world. Explainability is thus at least as old as early AI, and a natural consequence of the design of AI systems. While early expert systems consisted of handcrafted knowledge bases that enabled reasoning over narrow, well-defined domains (e.g., INTERNIST, MYCIN), such systems had no learning capabilities and only primitive uncertainty handling. The evolution of formal reasoning architectures to incorporate principled probabilistic reasoning, however, helped address the capture and use of uncertain knowledge.
The recent and relatively rapid success of AI/machine learning solutions arises from neural network architectures. A new generation of neural methods now scales to exploit the practical applicability of statistical and algebraic learning approaches in arbitrarily high-dimensional spaces. But despite their huge successes, largely in problems that can be cast as classification, their effectiveness is still limited by their un-debuggability and their inability to "explain" their decisions in a human-understandable and reconstructable way. So while AlphaGo or DeepStack can crush the best humans at Go or Poker, neither program has any internal model of its task; their representations defy interpretation by humans, there is no mechanism to explain their actions and behaviour, and, furthermore, there is no obvious instructional value: the high-performance systems cannot help humans improve. Even when we understand the underlying mathematical scaffolding of current machine learning architectures, it is often impossible to get insight into the internal working of the models; we need explicit modeling and reasoning tools to explain how and why a result was achieved. We also know that a significant challenge for future AI is contextual adaptation, i.e., systems that incrementally help to construct explanatory models for solving real-world problems. Here it would be beneficial not to exclude human expertise, but to augment human intelligence with artificial intelligence.
A comparison of new measurements of total monoterpene flux with improved measurements of speciated monoterpene flux
Many monoterpenes have been identified in forest emissions using gas chromatography (GC). Until now, it has been impossible to determine whether all monoterpenes are appropriately measured using GC techniques. We used a proton transfer reaction mass spectrometer (PTR-MS) coupled with the eddy covariance (EC) technique to measure mixing ratios and fluxes of total monoterpenes above a ponderosa pine plantation. We compared PTR-MS-EC results with simultaneous measurements of eight speciated monoterpenes (α-pinene, β-pinene, 3-carene, d-limonene, β-phellandrene, γ-terpinene, camphene, and terpinolene) made with an automated, in situ gas chromatograph with flame ionization detectors (GC-FID), coupled to a relaxed eddy accumulation (REA) system. Monoterpene mixing ratios and fluxes measured by PTR-MS averaged 30±2.3% and 31±9.2% larger than those measured by GC-FID, with larger mixing ratio discrepancies between the two techniques at night than during the day. Two unidentified peaks that correlated with α-pinene were resolved in the chromatograms; they completely accounted for the daytime difference and reduced the nighttime mixing ratio difference to 20±2.9%. Measurements of total monoterpenes by PTR-MS-EC indicated that GC-FID-REA measured the common, longer-lived monoterpenes well, but that additional terpenes emitted from the ecosystem made an important contribution to the total mixing ratio above the forest at night.
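As background to the flux comparison above: the eddy covariance technique estimates the surface-atmosphere exchange of a compound as the time-averaged covariance of fluctuations in vertical wind speed and the compound's mixing ratio. A minimal sketch of that calculation in Python, using synthetic data (the sampling rate, averaging period, and signal statistics are illustrative assumptions, not values from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 10 Hz time series over a 30 min averaging period (typical for EC)
n = 10 * 60 * 30
w = rng.normal(0.0, 0.3, n)                   # vertical wind speed (m/s)
c = 2.0 + 0.5 * w + rng.normal(0.0, 0.2, n)   # scalar mixing ratio, correlated with w

# Reynolds decomposition: subtract the block means to obtain fluctuations w', c'
w_prime = w - w.mean()
c_prime = c - c.mean()

# Eddy covariance flux = time average of the instantaneous product w'c'
flux = np.mean(w_prime * c_prime)
```

An upward (positive) flux corresponds to updrafts carrying, on average, higher mixing ratios than downdrafts, which is exactly what the covariance captures.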
Extreme 13C depletion of CCl2F2 in firn air samples from NEEM, Greenland
A series of 12 high-volume air samples collected from the S2 firn core during the North Greenland Eemian Ice Drilling (NEEM) 2009 campaign have been measured for mixing ratio and stable carbon isotope composition of the chlorofluorocarbon CFC-12 (CCl2F2). While the mixing ratio measurements compare favorably to other firn air studies, the isotope results show extreme 13C depletion at the deepest measurable depth (65 m), to values lower than δ13C = −80‰ vs. VPDB (the international stable carbon isotope scale), compared to present-day surface tropospheric measurements near −40‰. Firn air modeling was used to interpret these measurements. Reconstructed atmospheric time series indicate even larger depletions (to −120‰) near 1950 AD, with subsequent rapid enrichment of the atmospheric reservoir of the compound to the present-day value. Mass-balance calculations show that this change is likely to have been caused by a large change in the isotopic composition of anthropogenic CFC-12 emissions, probably due to technological advances in the CFC production process over the last 80 years, though direct evidence is lacking.
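For readers unfamiliar with the δ13C notation used above: a delta value expresses a sample's 13C/12C ratio as a permil deviation from a reference standard (VPDB). A small sketch of the conversion in both directions; the VPDB ratio used here is a commonly cited approximate value, not one taken from the study:

```python
# delta notation for stable carbon isotopes (values in permil, ‰):
#   δ13C = (R_sample / R_standard - 1) * 1000
# where R is the 13C/12C abundance ratio and the standard is VPDB.
R_VPDB = 0.011180  # approximate 13C/12C ratio of the VPDB standard

def delta13c(r_sample):
    """Convert a 13C/12C ratio to delta notation (permil vs. VPDB)."""
    return (r_sample / R_VPDB - 1.0) * 1000.0

def ratio_from_delta(d13c):
    """Invert delta notation back to a 13C/12C ratio."""
    return R_VPDB * (d13c / 1000.0 + 1.0)

# A δ13C of -80 permil, as at the deepest firn depth, corresponds to a
# 13C/12C ratio 8% lower than the standard:
r_deep = ratio_from_delta(-80.0)
```

The permil scale is used because natural isotope-ratio variations are tiny; a 40‰ shift, as between the firn and modern troposphere, is a 4% relative change in the ratio.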
Observations of oxidation products above a forest imply biogenic emissions of very reactive compounds
Vertical gradients of mixing ratios of volatile organic compounds have been measured in a ponderosa pine forest in Central California (38.90° N, 120.63° W, 1315 m). These measurements reveal large quantities of previously unreported oxidation products of short-lived biogenic precursors. The emission of biogenic precursors must be in the range of 13–66 µmol m⁻² h⁻¹ to produce the observed oxidation products, which is 6–30 times the emissions of total monoterpenes observed above the forest canopy on a molar basis. These reactive precursors constitute a large fraction of biogenic emissions at this site and are not included in current emission inventories. When oxidized by ozone, they should efficiently produce secondary aerosol and hydroxyl radicals.
On the challenges and opportunities in visualization for machine learning and knowledge extraction: A research agenda
We describe a selection of challenges at the intersection of machine learning and data visualization and outline a subjective research agenda based on professional and personal experience. The unprecedented increase in the amount, variety, and value of data has been significantly transforming the way that scientific research is carried out and businesses operate. Data science has emerged as a practice that enables this data-intensive innovation by gathering together and advancing knowledge from fields such as statistics, machine learning, knowledge extraction, data management, and visualization. Within data science, visualization plays a unique, and perhaps the ultimate, role: it facilitates cooperation between humans and computers, and in particular enables the analysis of diverse and heterogeneous data with complex computational methods whose algorithmic results are challenging to interpret and operationalize. While algorithm development is surely at the center of the pipeline in disciplines such as machine learning and knowledge discovery, it is visualization that ultimately makes the results accessible to the end user. Visualization can thus be seen as a mapping from arbitrarily high-dimensional abstract spaces to lower dimensions, and it plays a central and critical role in interacting with machine learning algorithms, particularly in interactive machine learning (iML) with the human in the loop. The central goal of the CD-MAKE VIS workshop is to spark discussions at this intersection of visualization, machine learning, and knowledge discovery and to bring together experts from these disciplines. This paper discusses a perspective on the challenges and opportunities in the integration of these disciplines and presents a number of directions and strategies for further research.